A Supplementary Material

In what follows, we give details of content omitted from the paper due to the space limit. The proof approach is based on Nagumo's Theorem, which gives necessary and sufficient conditions for positive invariance. Definition 2. Let A be a closed set. The following is a fundamental preliminary result for establishing positive invariance. Proposition 2. For any x ∈ ∂D, we have T... Lemma 2 is a consequence of Proposition 2. For ease of exposition, we first reproduce the lemma. First, suppose that condition (i) holds. Next, suppose that condition (ii) holds.
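To make the invariance condition concrete, the following is a minimal numeric sketch (a toy example of ours, not the paper's proof): for a smooth barrier h with safe set {x : h(x) ≥ 0} and dynamics ẋ = f(x), Nagumo's condition at a boundary point reduces to the Lie derivative ∇h(x)·f(x) being nonnegative. All function names here are illustrative assumptions.

```python
import numpy as np

# Toy check of a Nagumo-style invariance condition (illustrative only).
# Safe set: unit disk, h(x) = 1 - ||x||^2 >= 0.
def h(x):
    return 1.0 - x @ x

def grad_h(x):
    return -2.0 * x

def f(x):
    # Stable linear dynamics xdot = -x, flowing toward the origin.
    return -x

x_boundary = np.array([1.0, 0.0])        # h(x_boundary) = 0
lie_derivative = grad_h(x_boundary) @ f(x_boundary)
print(lie_derivative >= 0)               # True: the flow points into the disk
```

At nonsmooth boundary points of a ReLU NCBF, no single gradient exists, which is exactly why the paper appeals to a generalization of Nagumo's theorem rather than this smooth test.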
Neural Path Features and Neural Path Kernel: Understanding the role of gates in deep learning
Rectified linear unit (ReLU) activations can also be thought of as 'gates', which either pass or stop their pre-activation input when they are 'on' (pre-activation input positive) or 'off' (pre-activation input negative), respectively. A deep neural network (DNN) with ReLU activations has many gates, and the on/off status of each gate changes across input examples as well as network weights. For a given input example, only a subset of gates are 'active', i.e., on, and the sub-network of weights connected to these active gates is responsible for producing the output. At randomised initialisation, the active sub-network corresponding to a given input example is random. During training, as the weights are learnt, the active sub-networks are also learnt and could hold valuable information. To this end, we encode the on/off states of the gates for a given input in a novel 'neural path feature' (NPF), and the weights of the DNN are encoded in a novel 'neural path value' (NPV).
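The gating view above can be sketched in a few lines of NumPy (our own illustrative code, not the paper's notation): the 0/1 gate pattern of a layer is a binary feature of the input, and multiplying pre-activations by it recovers the ReLU output, i.e., the contribution of the active sub-network.

```python
import numpy as np

# Illustrative sketch: gate pattern of a small two-layer ReLU network.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))    # layer-1 weights
W2 = rng.standard_normal((2, 4))    # layer-2 weights

def gates_and_output(x):
    pre1 = W1 @ x
    g1 = (pre1 > 0).astype(float)   # gate is 1 iff pre-activation is positive
    h1 = g1 * pre1                  # equals ReLU(pre1): gates pass or stop input
    return g1, W2 @ h1

x = np.array([1.0, -0.5, 2.0])
g1, y = gates_and_output(x)
print(g1)   # 0/1 vector; which units are 'on' depends on x and the weights
```

Different inputs generally flip different gates, which is why the pattern g1 carries input-dependent information of the kind the NPF construction encodes.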
Exact Verification of ReLU Neural Control Barrier Functions
Hongchao Zhang, Junlin Wu, Yevgeniy Vorobeychik, Andrew Clark
Control Barrier Functions (CBFs) are a popular approach for safe control of nonlinear systems. In CBF-based control, the desired safety properties of the system are mapped to nonnegativity of a CBF, and the control input is chosen to ensure that the CBF remains nonnegative for all time. Recently, machine learning methods that represent CBFs as neural networks (neural control barrier functions, or NCBFs) have shown great promise due to the universal representability of neural networks. However, verifying that a learned CBF guarantees safety remains a challenging research problem. This paper presents novel exact conditions and algorithms for verifying safety of feedforward NCBFs with ReLU activation functions. The key challenge in doing so is that, due to the piecewise linearity of the ReLU function, the NCBF will be nondifferentiable at certain points, thus invalidating traditional safety verification methods that assume a smooth barrier function. We resolve this issue by leveraging a generalization of Nagumo's theorem for proving invariance of sets with nonsmooth boundaries to derive necessary and sufficient conditions for safety. Based on this condition, we propose an algorithm for safety verification of NCBFs that first decomposes the NCBF into piecewise linear segments and then solves a nonlinear program to verify safety of each segment as well as the intersections of the linear segments. We mitigate the complexity by only considering the boundary of the safe region and by pruning the segments with Interval Bound Propagation (IBP) and linear relaxation. We evaluate our approach through numerical studies with comparison to state-of-the-art SMT-based methods. Our code is available at https://github.com/HongchaoZhang-HZ/exactverif-reluncbf-nips23.
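The Interval Bound Propagation (IBP) pruning mentioned in the abstract can be sketched for one linear + ReLU layer as follows (a minimal sketch of standard IBP, assuming our own function names, not the authors' implementation): given elementwise input bounds [l, u], the sign-split of the weight matrix gives sound pre-activation bounds, and ReLU bounds follow monotonically.

```python
import numpy as np

# Minimal IBP sketch through one linear + ReLU layer (illustrative).
def ibp_linear(W, b, l, u):
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lo = Wp @ l + Wn @ u + b    # worst case chosen per sign of each weight
    hi = Wp @ u + Wn @ l + b
    return lo, hi

def ibp_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, -1.0])
l = np.array([-1.0, -1.0])      # elementwise input lower bounds
u = np.array([1.0, 1.0])        # elementwise input upper bounds

lo, hi = ibp_relu(*ibp_linear(W, b, l, u))
print(lo, hi)
```

When the pre-activation upper bound of a unit is below zero over a region, that unit is provably inactive there, so the corresponding linear segment can be pruned before running the more expensive nonlinear program.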